Visual-inertial localization is a key problem in computer vision and robotics applications such as virtual reality, self-driving cars, and aerial vehicles. The goal is to estimate an accurate pose of an object when either the environment or the dynamics are known. Recent methods directly regress the pose using convolutional and spatio-temporal networks. Absolute pose regression (APR) techniques predict the absolute camera pose for an image input in a known scene. Odometry methods perform relative pose regression (RPR), which predicts the relative pose from known object dynamics (visual or inertial inputs). The localization task can be improved by retrieving information from both data sources in a cross-modal setup, which is a challenging problem due to the contradictory tasks. In this work, we conduct a benchmark to evaluate deep multimodal fusion based on PGO and attention networks. Auxiliary and Bayesian learning are integrated for the APR task. We show accuracy improvements for the RPR-aided APR task and for the RPR-RPR task for aerial vehicles and hand-held devices. We conduct experiments on the EuRoC MAV and PennCOSYVIO datasets and record a novel industry dataset.
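To make the cross-modal setup concrete, the sketch below shows one way an attention-based visual-inertial fusion network with separate absolute and relative pose heads could be wired up in PyTorch. It is a minimal illustration under assumed layer sizes and a toy visual encoder, not the benchmarked architecture.

```python
# Hedged sketch of attention-based visual-inertial fusion for pose regression.
# Not the paper's architecture; layer sizes and head counts are illustrative only.
import torch
import torch.nn as nn

class VisualInertialFusion(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # toy visual encoder (stands in for a CNN backbone)
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        # inertial encoder over an IMU window (acc + gyro = 6 channels)
        self.inertial = nn.LSTM(6, feat_dim, batch_first=True)
        # cross-modal attention: visual features attend to inertial features
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.apr_head = nn.Linear(feat_dim, 7)   # absolute pose: xyz + quaternion
        self.rpr_head = nn.Linear(feat_dim, 7)   # relative pose: xyz + quaternion

    def forward(self, image, imu):
        v = self.visual(image).unsqueeze(1)      # (B, 1, D)
        h, _ = self.inertial(imu)                # (B, T, D)
        fused, _ = self.attn(v, h, h)            # visual query, inertial keys/values
        fused = fused.squeeze(1)
        return self.apr_head(fused), self.rpr_head(fused)

model = VisualInertialFusion()
abs_pose, rel_pose = model(torch.randn(2, 3, 64, 64), torch.randn(2, 100, 6))
```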
For many applications, analyzing the uncertainty of machine learning models is essential. While research on uncertainty quantification (UQ) techniques is very advanced for computer vision applications, UQ methods for spatio-temporal data are less studied. In this paper, we focus on models for online handwriting recognition, one specific type of spatio-temporal data. The data is observed from a sensor-enhanced pen, and the task is to classify written characters. We conduct a broad evaluation of aleatoric (data) and epistemic (model) UQ based on two prominent techniques for Bayesian inference, Stochastic Weight Averaging-Gaussian (SWAG) and Deep Ensembles. Besides yielding a better understanding of the model, the UQ techniques can detect out-of-distribution data and domain shifts when combining right-handed and left-handed writers (an under-represented group).
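As an illustration of the aleatoric/epistemic split used in the evaluation, the sketch below computes predictive entropy, expected per-member entropy (aleatoric) and their difference (epistemic, i.e., mutual information) from the softmax outputs of a deep ensemble. The ensemble members and inputs are placeholders, not the paper's models.

```python
# Hedged sketch of deep-ensemble uncertainty for a character classifier.
# The ensemble outputs below are random placeholders, not trained models.
import numpy as np

def ensemble_uncertainty(member_probs):
    """member_probs: (M, N, C) softmax outputs of M ensemble members."""
    mean_p = member_probs.mean(axis=0)                                    # (N, C)
    total = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)                # predictive entropy
    aleatoric = -(member_probs * np.log(member_probs + 1e-12)).sum(axis=2).mean(axis=0)
    epistemic = total - aleatoric                                         # mutual information
    return mean_p.argmax(axis=1), aleatoric, epistemic

# toy usage: 5 members, 3 samples, 4 classes
probs = np.random.dirichlet(np.ones(4), size=(5, 3))
pred, alea, epis = ensemble_uncertainty(probs)
```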
The performance of a machine learning model degrades when it is applied to data from a domain that is similar to, but different from, the one it was originally trained on. To mitigate this domain shift problem, domain adaptation (DA) techniques search for an optimal transformation that converts the (current) input data from a source domain into a target domain, in order to learn a domain-invariant representation that reduces the domain discrepancy. This paper proposes a novel supervised DA approach based on two steps. First, we search for an optimal class-dependent transformation from the source to the target domain from a few samples. We consider optimal transport methods such as the earth mover's distance, Sinkhorn transport, and correlation alignment. Second, we use embedding similarity techniques to select the corresponding transformation at inference time. We use correlation metrics and higher-order moment matching techniques. We conduct an extensive evaluation on time-series datasets with domain shift, including simulated and various online handwriting datasets, to demonstrate the performance.
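As a toy illustration of the optimal-transport step, the sketch below runs plain Sinkhorn iterations between a source and a target sample set and maps the source points into the target domain via the barycentric projection of the transport plan. It is a self-contained stand-in under uniform marginals and a squared-Euclidean cost, not the class-dependent procedure proposed above.

```python
# Hedged sketch of entropy-regularized optimal transport (Sinkhorn iterations);
# a toy stand-in for the class-dependent source-to-target transformations.
import numpy as np

def sinkhorn_transport(Xs, Xt, reg=0.1, n_iter=200):
    ns, nt = len(Xs), len(Xt)
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)    # squared Euclidean cost
    C = C / C.max()                                          # normalize for stability
    K = np.exp(-C / reg)
    a, b = np.ones(ns) / ns, np.ones(nt) / nt                # uniform marginals
    u, v = np.ones(ns) / ns, np.ones(nt) / nt
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                          # transport plan
    # barycentric mapping: transported source samples in the target domain
    return (P / P.sum(axis=1, keepdims=True)) @ Xt

Xs, Xt = np.random.randn(50, 8), np.random.randn(60, 8) + 2.0
Xs_mapped = sinkhorn_transport(Xs, Xt)
```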
Purpose. Handwriting is one of the most frequent patterns in everyday life and gives rise to challenging applications such as handwriting recognition (HWR), writer identification, and signature verification. In contrast to offline HWR, which uses only spatial information (i.e., images), online HWR (OnHWR) uses richer spatio-temporal information (i.e., trajectory data or inertial data). While many offline HWR datasets exist, only little data is available for the development of OnHWR methods on paper, as this requires hardware-integrated pens. Methods. This paper presents data and benchmark models for real-time sequence-to-sequence (seq2seq) learning and single character-based recognition. Our data is recorded by a sensor-enhanced ballpoint pen, yielding sensor data streams from triaxial accelerometers, a gyroscope, a magnetometer, and a force sensor at 100 Hz. We propose a variety of datasets, including equations and words, for writer-dependent and writer-independent tasks. Our datasets allow a comparison between classical OnHWR on tablets and OnHWR with sensor-enhanced pens. We provide an evaluation benchmark for seq2seq and single character-based HWR using recurrent and temporal convolutional networks and Transformers combined with a connectionist temporal classification (CTC) loss and cross-entropy (CE) losses. Results. Our convolutional network combined with BiLSTMs outperforms Transformer-based architectures, is on par with InceptionTime for sequence-based classification tasks, and yields better results compared with 28 state-of-the-art techniques. Time-series augmentation methods improve the sequence-based tasks, and we show that CE variants can improve the single-character classification task.
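A minimal sketch of the kind of CTC-trained recurrent model named above is given below: a small 1D CNN followed by a BiLSTM, trained with PyTorch's CTC loss on multivariate sensor streams. The channel count, layer sizes, and label alphabet are illustrative assumptions, not the benchmark configuration.

```python
# Hedged sketch of a CNN+BiLSTM model with a CTC loss on pen-sensor streams;
# the 13-channel input and alphabet size are placeholder assumptions.
import torch
import torch.nn as nn

class CNNBiLSTMCTC(nn.Module):
    def __init__(self, in_ch=13, n_classes=60, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(64, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes + 1)        # +1 for the CTC blank

    def forward(self, x):                                     # x: (B, T, in_ch)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)      # (B, T, 64)
        h, _ = self.lstm(h)
        return self.fc(h).log_softmax(-1)                     # (B, T, n_classes+1)

model = CNNBiLSTMCTC()
x = torch.randn(4, 200, 13)                # 4 sequences, 200 time steps
log_probs = model(x).transpose(0, 1)       # CTC expects (T, B, C)
targets = torch.randint(1, 61, (4, 10))    # dummy label sequences (blank = 0)
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((4,), 200),
                           target_lengths=torch.full((4,), 10))
```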
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that support knowledge engineers with exploring text collections and discovering and linking new (so-called open-world) entities to the knowledge graph. We argue that, although neural approaches to text mining have yielded impressive results in the past years, current benchmarks do not properly reflect the typical challenges encountered in the industrial wild. Therefore, our first contribution is an open benchmark coined IRT2 (inductive reasoning with text) that (1) covers knowledge graphs of varying sizes (including very small ones), (2) comes with incidental, low-quality text mentions, and (3) includes not only triple completion but also ranking, which is relevant for supporting experts with discovery tasks. We investigate two neural models for inductive link prediction, one based on end-to-end learning and one that learns from the knowledge graph and text data in separate steps. These models compete with a strong bag-of-words baseline. For linking, the results show a significant performance advantage for the neural approaches as soon as the available graph data decreases. For ranking, the results are promising, and the neural approaches outperform the sparse retriever by a wide margin.
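For reference, a bag-of-words retriever of the kind used as the sparse baseline can be as simple as TF-IDF plus cosine similarity; the sketch below ranks toy entity descriptions against an open-world text mention. The entity texts and the mention are invented examples, not IRT2 data.

```python
# Hedged sketch of a sparse bag-of-words retriever: rank knowledge-graph
# entities for a text mention by TF-IDF cosine similarity (toy data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

entity_texts = {
    "Q1": "an open source deep learning framework for tensors",
    "Q2": "a programming language for systems software",
    "Q3": "a graph database for storing triples",
}
mention = "library for training deep neural networks on tensors"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(list(entity_texts.values()))
query_vec = vectorizer.transform([mention])
scores = cosine_similarity(query_vec, doc_matrix)[0]
ranking = sorted(zip(entity_texts, scores), key=lambda kv: kv[1], reverse=True)
print(ranking)   # entities ranked by lexical similarity to the mention
```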
Machine learning models are typically evaluated by computing their similarity with reference annotations and trained by maximizing similarity with such annotations. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating to better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
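One simple way to approximate such a reliability ceiling, sketched below, is to average the pairwise Dice overlap between the annotations of several raters for the same case; this aggregation choice is an assumption for illustration, not the exact PGT estimation procedure.

```python
# Hedged sketch of estimating inter-rater reliability via pairwise Dice overlap,
# as one possible proxy for a PGT-style performance ceiling (toy masks only).
import numpy as np
from itertools import combinations

def dice(a, b, eps=1e-8):
    return (2 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def inter_rater_dice(masks):
    """masks: list of binary arrays, one annotation per rater."""
    pairs = [dice(a, b) for a, b in combinations(masks, 2)]
    return float(np.mean(pairs))

# toy example: three raters annotating the same image
rng = np.random.default_rng(0)
raters = [rng.random((64, 64)) > 0.6 for _ in range(3)]
print(inter_rater_dice(raters))
```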
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to the development of particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands of all established quantum community detection approaches, we introduce a novel QUBO-based approach that needs only as many qubits as the graph has nodes and is represented by a QUBO matrix as sparse as the input graph's adjacency matrix. The substantial improvement in the sparsity of the QUBO matrix, which is typically very dense in related work, is achieved through the novel concept of separation-nodes. Instead of assigning every node to a community directly, this approach relies on the identification of a separation-node set which, upon its removal from the graph, yields a set of connected components representing the core components of the communities. The nodes from the separation-node set are then assigned to the identified community cores by a greedy heuristic, and subsequent experimental results yield a proof of concept. This work hence presents a promising approach to NISQ-ready quantum community detection, catalyzing the application of quantum computers to the network structure analysis of large-scale, real-world problem instances.
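The classical post-processing step described above can be sketched directly: remove a candidate separation-node set, treat the remaining connected components as community cores, and greedily attach each separation node to the core it shares the most edges with. In the toy example below the separation set is hand-picked; identifying it is the part delegated to the QUBO formulation.

```python
# Hedged sketch of the greedy assignment of separation nodes to community cores;
# the separation set here is hand-picked for illustration, not QUBO-derived.
import networkx as nx

def communities_from_separation_nodes(G, separation_nodes):
    core_graph = G.copy()
    core_graph.remove_nodes_from(separation_nodes)
    cores = [set(c) for c in nx.connected_components(core_graph)]
    for v in separation_nodes:
        # attach v to the core sharing the most neighbours (ties broken arbitrarily)
        best = max(cores, key=lambda c: sum(1 for u in G.neighbors(v) if u in c))
        best.add(v)
    return cores

G = nx.barbell_graph(5, 1)                 # two cliques joined by one path node
print(communities_from_separation_nodes(G, separation_nodes=[5]))
```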
Efficient surrogate modelling is a key requirement for uncertainty quantification in data-driven scenarios. In this work, a novel approach of using Sparse Random Features for surrogate modelling in combination with self-supervised dimensionality reduction is described. The method is compared to other methods on synthetic and real data obtained from crashworthiness analyses. The results show that the approach described here outperforms state-of-the-art surrogate modelling techniques, namely Polynomial Chaos Expansions and Neural Networks.
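To illustrate the random-features idea, the sketch below builds a surrogate from fixed random Fourier features with a ridge fit on top; the feature count, kernel lengthscale, and the omission of the sparsification and self-supervised dimensionality-reduction steps are simplifying assumptions.

```python
# Hedged sketch of a random-features surrogate: fixed random Fourier features
# plus a closed-form ridge fit; a simplified stand-in, not the evaluated method.
import numpy as np

rng = np.random.default_rng(0)

def make_random_fourier_features(d, n_features=200, lengthscale=1.0):
    # fixed random projection approximating an RBF kernel feature map
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def fit_ridge(Phi, y, lam=1e-3):
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

# toy surrogate of an "expensive" simulator response y = f(x)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
phi = make_random_fourier_features(d=2)
w = fit_ridge(phi(X), y)
print(np.mean((phi(X) @ w - y) ** 2))   # training error of the surrogate
```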
In recent years, distributional reinforcement learning has produced many state-of-the-art results. Increasingly sample-efficient distributional algorithms for the discrete action domain have been developed over time, varying primarily in the way they parameterize their approximations of value distributions and in how they quantify the differences between those distributions. In this work we transfer three of the most well-known and successful of those algorithms (QR-DQN, IQN and FQF) to the continuous action domain by extending two powerful actor-critic algorithms (TD3 and SAC) with distributional critics. We investigate whether the relative performance of the methods for the discrete action space translates to the continuous case. To that end, we compare them empirically on the pybullet implementations of a set of continuous control tasks. Our results indicate qualitative invariance regarding the number and placement of distributional atoms in the deterministic, continuous action setting.
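The common ingredient of such distributional critics is a quantile regression loss; the sketch below implements the standard quantile Huber loss between predicted critic quantiles and target samples. The shapes and the kappa threshold are conventional choices, not tied to the specific TD3/SAC extensions evaluated here.

```python
# Hedged sketch of the quantile Huber loss used by QR-style distributional
# critics; batch sizes and quantile counts are placeholder values.
import torch

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """pred_quantiles: (B, N) critic quantiles; target_samples: (B, M) targets."""
    B, N = pred_quantiles.shape
    taus = (torch.arange(N, dtype=torch.float32) + 0.5) / N          # quantile midpoints
    # pairwise TD errors between every target sample and every predicted quantile
    u = target_samples.unsqueeze(2) - pred_quantiles.unsqueeze(1)    # (B, M, N)
    huber = torch.where(u.abs() <= kappa,
                        0.5 * u ** 2,
                        kappa * (u.abs() - 0.5 * kappa))
    weight = (taus - (u.detach() < 0).float()).abs()                 # asymmetric quantile weight
    return (weight * huber / kappa).sum(dim=2).mean()

loss = quantile_huber_loss(torch.randn(32, 25), torch.randn(32, 25))
```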
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community that is building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes that take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.